Search Results: "kevin"

8 February 2016

Lunar: Reproducible builds: week 41 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes After remarks from Guillem Jover, Lunar updated his patch adding generation of .buildinfo files in dpkg.

Packages fixed The following packages have become reproducible due to changes in their build dependencies: dracut, ent, gdcm, guilt, lazarus, magit, matita, resource-agents, rurple-ng, shadow, shorewall-doc, udiskie. The following packages became reproducible after getting fixed:
  • disque/1.0~rc1-5 by Chris Lamb, noticed by Reiner Herrmann.
  • dlm/4.0.4-2 by Ferenc Wágner.
  • drbd-utils/8.9.6-1 by Apollon Oikonomopoulos.
  • java-common/0.54 by Emmanuel Bourg.
  • libjibx1.2-java/1.2.6-1 by Emmanuel Bourg.
  • libzstd/0.4.7-1 by Kevin Murray.
  • python-releases/1.0.0-1 by Jan Dittberner.
  • redis/2:3.0.7-2 by Chris Lamb, noticed by Reiner Herrmann.
  • tetex-brev/4.22.github.20140417-3 by Petter Reinholdtsen.
Some uploads fixed some reproducibility issues, but not all of them:
  • anarchism/14.0-4 by Holger Levsen.
  • hhvm/3.11.1+dfsg-1 by Faidon Liambotis.
  • netty/1:4.0.34-1 by Emmanuel Bourg.
Patches submitted which have not made their way to the archive yet:
  • #813309 on lapack by Reiner Herrmann: removes the test log and sorts the files packed into the static library locale-independently.
  • #813345 on elastix by akira: suggest to use the $datetime placeholder in Doxygen footer.
  • #813892 on dietlibc by Reiner Herrmann: remove gzip headers, sort md5sums file, and sort object files linked in static libraries.
  • #813912 on git by Reiner Herrmann: remove timestamps from documentation generated with asciidoc, remove gzip headers, and sort md5sums and tclIndex files.

reproducible.debian.net For the first time, we've reached more than 20,000 packages with reproducible builds for sid on amd64 with our current test framework. Vagrant Cascadian has set up another test system for armhf, enabling four more builder jobs to be added to Jenkins. (h01ger)

Package reviews 233 reviews have been removed, 111 added and 86 updated in the previous week. 36 new FTBFS bugs were reported by Chris Lamb and Alastair McKinstry. New issue: timestamps_in_manpages_generated_by_yat2m. The description for the blacklisted_on_jenkins issue has been improved. Some packages are also now tagged with blacklisted_on_jenkins_armhf_only.

Misc. Steven Chamberlain gave an update on the status of FreeBSD and variants after the BSD devroom at FOSDEM 16. He also discussed how jails can be used for easier and faster reproducibility tests. The video for h01ger's talk in the main track of FOSDEM 16 about the reproducible ecosystem is now available.

17 December 2015

Alberto García: Improving disk I/O performance in QEMU 2.5 with the qcow2 L2 cache

QEMU 2.5 has just been released, with a lot of new features. As with the previous release, we have also created a video changelog. I plan to write a few blog posts explaining some of the things I have been working on. In this one I'm going to talk about how to control the size of the qcow2 L2 cache. But first, let's see why that cache is useful.

The qcow2 file format

qcow2 is the main format for disk images used by QEMU. One of the features of this format is that its size grows on demand, and the disk space is only allocated when it is actually needed by the virtual machine. A qcow2 file is organized in units of constant size called clusters. The virtual disk seen by the guest is also divided into guest clusters of the same size. QEMU defaults to 64KB clusters, but a different value can be specified when creating a new image:

qemu-img create -f qcow2 -o cluster_size=128K hd.qcow2 4G

In order to map the virtual disk as seen by the guest to the qcow2 image in the host, the qcow2 image contains a set of tables organized in a two-level structure. These are called the L1 and L2 tables. There is one single L1 table per disk image. This table is small and is always kept in memory. There can be many L2 tables, depending on how much space has been allocated in the image. Each table is one cluster in size. In order to read or write data to the virtual disk, QEMU needs to read its corresponding L2 table to find out where that data is located. Since reading the table for each I/O operation can be expensive, QEMU keeps a cache of L2 tables in memory to speed up disk access.
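The two-level lookup just described can be sketched in a few lines. This is an illustrative model, not QEMU code; it assumes the default 64KB cluster size and the 8-byte L2 entry size that the sizing formula later in the post implies:

```python
# Illustrative model of the L1/L2 lookup described above (not QEMU code).
# Assumes the default 64 KB cluster size; the 8-byte L2 entry size matches
# the sizing formula given later in the post.

CLUSTER_SIZE = 64 * 1024            # default qcow2 cluster size
L2_ENTRIES = CLUSTER_SIZE // 8      # an L2 table is one cluster of 8-byte entries

def locate(guest_offset):
    """Map a guest disk offset to (L1 index, L2 index, offset within cluster)."""
    cluster_no = guest_offset // CLUSTER_SIZE
    l1_index = cluster_no // L2_ENTRIES      # which L2 table to consult
    l2_index = cluster_no % L2_ENTRIES       # which entry within that table
    in_cluster = guest_offset % CLUSTER_SIZE
    return l1_index, l2_index, in_cluster
```

Under these assumptions one L2 table covers L2_ENTRIES * CLUSTER_SIZE = 512 MB of guest data, which is why I/O scattered over a large disk touches many different L2 tables.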
The L2 cache can have a dramatic impact on performance. As an example, here's the number of I/O operations per second that I get with random read requests in a fully populated 20GB disk image:

L2 cache size   Average IOPS
1 MB            5100
1.5 MB          7300
2 MB            12700
2.5 MB          63600
If you're using an older version of QEMU you might have trouble getting the most out of the qcow2 cache because of this bug, so either upgrade to at least QEMU 2.3 or apply this patch. (In addition to the L2 cache, QEMU also keeps a refcount cache. This is used for cluster allocation and internal snapshots, but I'm not covering it in this post. Please refer to the qcow2 documentation if you want to know more about refcount tables.)

Understanding how to choose the right cache size

In order to choose the cache size we need to know how it relates to the amount of allocated space. The amount of virtual disk that can be mapped by the L2 cache (in bytes) is:

disk_size = l2_cache_size * cluster_size / 8

With the default value for cluster_size (64KB) that is:

disk_size = l2_cache_size * 8192

So in order to have a cache that can cover n GB of disk space with the default cluster size we need:

l2_cache_size = disk_size_GB * 131072

QEMU has a default L2 cache of 1MB (1048576 bytes), so using the formulas we've just seen, that covers 1048576 / 131072 = 8 GB of virtual disk. This means that if the size of your virtual disk is larger than 8 GB you can speed up disk access by increasing the size of the L2 cache. Otherwise you'll be fine with the defaults.

How to configure the cache size

Cache sizes can be configured using the -drive option on the command line, or the blockdev-add QMP command. There are three options available, and all of them take a size in bytes: l2-cache-size, refcount-cache-size, and cache-size (the combined size of both caches). There are two things that need to be taken into account:
  1. Both the L2 and refcount block caches must have a size that is a multiple of the cluster size.
  2. If you only set one of the options above, QEMU will automatically adjust the others so that the L2 cache is 4 times bigger than the refcount cache.
This means that these three options are equivalent:

-drive file=hd.qcow2,l2-cache-size=2097152
-drive file=hd.qcow2,refcount-cache-size=524288
-drive file=hd.qcow2,cache-size=2621440
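The sizing arithmetic above can be checked with a short sketch. The function names are mine, not QEMU options; the 4:1 L2-to-refcount split is the adjustment rule stated in point 2:

```python
# Sketch of the cache-sizing arithmetic from the formulas above.
# Function names are illustrative, not QEMU options.

def l2_cache_for_disk(disk_size, cluster_size=64 * 1024):
    """Bytes of L2 cache needed to map a whole disk.

    Solves disk_size = l2_cache_size * cluster_size / 8 for l2_cache_size.
    """
    return disk_size * 8 // cluster_size

def split_cache_size(cache_size):
    """Split a combined cache-size so the L2 cache is 4x the refcount cache."""
    refcount = cache_size // 5
    return cache_size - refcount, refcount

# The default 1 MB L2 cache covers exactly 8 GB with 64 KB clusters:
assert l2_cache_for_disk(8 * 1024**3) == 1048576
# The three equivalent -drive options above: 2621440 = 2097152 + 524288
assert split_cache_size(2621440) == (2097152, 524288)
```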
Although I'm not covering the refcount cache here, it's worth noting that it's used much less often than the L2 cache, so it's perfectly reasonable to keep it small:

-drive file=hd.qcow2,l2-cache-size=4194304,refcount-cache-size=262144

Reducing the memory usage

The problem with a large cache size is that it obviously needs more memory. QEMU has a separate L2 cache for each qcow2 file, so if you're using many big images you might need a considerable amount of memory if you want to have a reasonably sized cache for each one. The problem gets worse if you add backing files and snapshots to the mix. Consider this scenario:
Here, hd0 is a fully populated disk image, and hd1 a freshly created image as a result of a snapshot operation. Reading data from this virtual disk will fill up the L2 cache of hd0, because that's where the actual data is read from. However hd0 itself is read-only, and if you write data to the virtual disk it will go to the active image, hd1, filling up its L2 cache as a result. At some point you'll have in memory cache entries from hd0 that you won't need anymore because all the data from those clusters is now retrieved from hd1. Let's now create a new live snapshot:
Now we have the same problem again. If we write data to the virtual disk it will go to hd2 and its L2 cache will start to fill up. At some point a significant amount of the data from the virtual disk will be in hd2, however the L2 caches of hd0 and hd1 will be full as a result of the previous operations, even if they're no longer needed. Imagine now a scenario with several virtual disks and a long chain of qcow2 images for each one of them. See the problem?

I wanted to improve this a bit, so I was working on a new setting that allows the user to reduce the memory usage by cleaning unused cache entries. This new setting is available in QEMU 2.5, and is called cache-clean-interval. It defines an interval (in seconds) after which all cache entries that haven't been accessed are removed from memory. This example removes all unused cache entries every 15 minutes:

-drive file=hd.qcow2,cache-clean-interval=900

If unset, the default value for this parameter is 0, which disables this feature.

Further information

In this post I only intended to give a brief summary of the qcow2 L2 cache and how to tune it in order to increase the I/O performance, but it is by no means an exhaustive description of the disk format. If you want to know more about the qcow2 format, here are a few links:

Acknowledgments

My work in QEMU is sponsored by Outscale and has been made possible by Igalia and the invaluable help of the QEMU development team.
Enjoy QEMU 2.5!

16 November 2015

Dirk Eddelbuettel: Rcpp 0.12.2: More refinements

The second update in the 0.12.* series of Rcpp is now on the CRAN network for GNU R. As usual, I will also push a Debian package. This follows the 0.12.0 release from late July which started to add some serious new features, and builds upon the 0.12.1 release in September. It also marks the sixth release this year where we managed to keep a steady bi-monthly release frequency. Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 512 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by more than fifty packages from the last release in September (and we recently blogged about crossing 500 dependents). This release once again features pull requests from two new contributors, with Nathan Russell and Tianqi Chen joining in. As shown below, other recent contributors (such as Dan) are keeping at it too. Keep'em coming! Luke Tierney also emailed about a code smell he spotted, which we took care of. A big Thank You! to everybody helping with code, bug reports or documentation. See below for a detailed list of changes extracted from the NEWS file.
Changes in Rcpp version 0.12.2 (2015-11-14)
  • Changes in Rcpp API:
    • Correct return type in product of matrix dimensions (PR #374 by Florian)
    • Before creating a single String object from a SEXP, ensure that it is from a vector of length one (PR #376 by Dirk, fixing #375).
    • No longer use STRING_ELT as a left-hand side, thanks to a heads-up by Luke Tierney (PR #378 by Dirk, fixing #377).
    • Rcpp Module objects are now checked more carefully (PR #381 by Tianqi, fixing #380)
    • An overflow in Matrix column indexing was corrected (PR #390 by Qiang, fixing a bug reported by Allessandro on the list)
    • Nullable types can now be assigned R_NilValue in function signatures. (PR #395 by Dan, fixing issue #394)
    • operator<<() now always shows decimal points (PR #396 by Dan)
    • Matrix classes now have a transpose() function (PR #397 by Dirk fixing #383)
    • operator<<() for complex types was added (PRs #398 by Qiang and #399 by Dirk, fixing #187)
  • Changes in Rcpp Attributes:
    • Enable export of C++ interface for functions that return void.
  • Changes in Rcpp Sugar:
    • Added new Sugar function cummin(), cummax(), cumprod() (PR #389 by Nathan Russell fixing #388)
    • Enabled sugar math operations for subsets; e.g. x[y] + x[z]. (PR #393 by Kevin and Qiang, implementing #392)
  • Changes in Rcpp Documentation:
    • The NEWS file now links to GitHub issue tickets and pull requests.
    • The Rcpp.bib file with bibliographic references was updated.
Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

4 February 2015

Charles Plessy: News of the package mime-support.

The package mime-support is installed by default on Debian systems. It has two roles: first, to provide the file /etc/mime.types that associates media types (formerly called MIME types) with suffixes of file names, and second, to provide the mailcap system that associates media types with programs. I adopted this package at the end of the development cycle of Wheezy.

Changes since Wheezy. The version distributed in Jessie brings a few additions to /etc/mime.types. Among them are application/vnd.debian.binary-package and text/vnd.debian.copyright, which as their names suggest describe two file formats designed by Debian. I registered these types with the IANA, which has been more open to the addition of new types since RFC 6838. The biggest change is the automatic extraction of the associations between programs and media types that are declared in the menu files in FreeDesktop format. Before, the maintainer of the Debian package had to extract this information and translate it into mailcap format by hand. The automation is done via dpkg triggers. A big thank you to Kevin Ryde, who gave me invaluable help with the developments and corrections to the run-mailcap program, and to all the other contributors. Your help is always welcome!

Security updates. In December, Debian was contacted by Timothy D. Morgan, who found that an attacker could get run-mailcap to execute commands by inserting them in file names (CVE-2014-7209). This first security update of mine went well, thanks to the help and instructions of Salvatore Bonaccorso from the Security team. The problem is solved in Wheezy, Jessie and Sid, as well as in Squeeze through its long term support. One of the consequences of this security update is that run-mailcap will systematically use the absolute path to the files to open. For harmless files, this is a bit ugly. This will perhaps be improved after Jessie is released.
Future projects. The file /etc/mime.types is kept up to date by hand; this is slow and inefficient. The package shared-mime-info contains similar information that could be used to autogenerate this file, but that would require parsing an XML source that is quite complex. For the moment, I am considering importing Fedora's mailcap package, where the file /etc/mime.types is very well kept up to date. I have not yet decided how to do it, but maybe just by moving that file from one package to the other. In that case, we would have the mime-support package providing mailcap support, and the package whose source is Fedora's mailcap package providing /etc/mime.types. Perhaps it would be better to use clearer names, such as mailcap-support for the first and media-types for the second? Separating the two main functionalities of mime-support would have an interesting consequence: the possibility of not installing support for the mailcap system, or of making it optional, and instead using the FreeDesktop system (xdg-open) from the package xdg-utils. Something to keep in mind...

28 June 2014

Russ Allbery: Review: The Emergency Sasquatch Ordinance

Review: The Emergency Sasquatch Ordinance, by Kevin Underhill
Publisher: American Bar Association
Copyright: 2013
ISBN: 1-62722-269-3
Format: Trade paperback
Pages: 334
First, if you have not read Lowering the Bar, you should do so. There aren't many blogs where I will start reading, end up reading numerous posts out loud to anyone in the vicinity, and then find myself systematically reading the entire backlog (which, in this case, is substantial). Kevin Underhill is a lawyer with civil libertarian leanings, a wicked sense of humor, and a knack for finding the most absurd legal stories. He varies between weary incredulity and caustic sarcasm, almost always manages to make me smile, and often makes me laugh out loud. The Emergency Sasquatch Ordinance, and Other Real Laws that Human Beings Have Actually Dreamed Up, Enacted, and Sometimes Even Enforced is, as one might expect, more of the same sort of thing that one gets from Lowering the Bar. However, unlike a lot of books released by long-standing bloggers, it is not a collection of previously published material. This is an entirely new collection of legal absurdities, drawing on history, US federal, state, and local law, and some non-US laws, and complete with Underhill's trademark commentary. Long-time readers of Lowering the Bar have seen several posts about this sort of thing before, which also means there's a great sample up on the web. See posts tagged "Law (Dumb)" for a preview of the sort of thing you'll get, although as mentioned the material in the book is original. My one complaint about the book version of this post type is that I think the blog posts tend to be longer and contain a bit more analysis. Underhill gets in about one punch line per law in the book, and I frequently wished for a somewhat longer analysis. The blog posts tend to get funnier and more biting once he works up a head of steam on a topic. The book is, alas, a bit uniform in its style. It's composed of a large number of "chapters," each of which is usually a page or two and covers a single law. 
Those are divided into ancient, pre-modern, US federal, US state, US city, and non-US laws, but with considerably funnier section headings than those. Each chapter has a title that's usually a brief summary of the law (often funny), an introduction, a quote (or several), and a punch line. There's not much variation, which makes it good bathroom reading (you can pick it up anywhere, read a few pages, and put it back down without fear of losing context) but lacks variety when read in large gulps. I find Lowering the Bar itself funnier than this book. I think that's because the blog posts are longer, more varied, and mix weird laws with caustic political commentary and entertaining stories of the extremely ill-advised things people do around, within, and against the law. The odd laws are entertaining; the stories of defendants, the accounts of weird legal proceedings, and Underhill's exasperated sarcasm about legal and political stupidities are even funnier. If you had to choose between the book and the blog, I'd go with the blog. That said, you don't have to choose, and if you've been reading the blog for a while, more material is always welcome. A book also has the substantial advantage of being a way to throw some money at a blogger as a reward for hours of entertainment, with a bonus of getting an enjoyable physical object in return. I certainly don't regret the purchase, and I've already had an opportunity to mention Millard Fillmore's State of the Union address on guano. I'm not sure if that means the book made me smarter, but I do believe it made me more entertaining. TL;DR version: Read Lowering the Bar. If you like that, there's this book. I think you can figure out the rest from there.

Rating: 7 out of 10

18 June 2014

Matthias Klumpp: Tanglu 2 (Bartholomea annulata) status update #1


Bartholomea annulata (c) Kevin Bryant

It is time for a new Tanglu update, which has been overdue for a long time now! Many things happened in Tanglu development, so here is just a short overview of what was done in the past months.

Infrastructure

Debile: The whole Tanglu distribution is now built with Debile, replacing Jenkins, which was difficult to use for package building purposes (although Jenkins is great for other things). You can see the Tanglu builders in action at buildd.tg.o. The migration to Debile took a lot of time (a lot more than expected), and blocked the Bartholomea development at the beginning, but now it is working smoothly. Many thanks to all people who have been involved with making Debile work for Tanglu, especially Jon Severinsson. And of course many thanks to the Debile developers for helping with the integration, Sylvestre Ledru and of course Paul Tagliamonte.

Archive Server Migration: Those who read the tanglu-announce mailinglist know this already: We moved the main archive server at archive.tg.o to a new location, and to a very powerful machine. We also added some additional security measures to it, to prevent attacks. The previous machine is now being used for the bugtracker at bugs.tg.o and for some other things, including an archive mirror and the new Tanglu User Forums. See more about that below :-)

Transitions: There is huge ongoing work on package transitions. Take a look at our transition tracker and the staging migration log to get a taste of it. Merging with Debian Unstable is also going on right now, and we are working on merging some of the Tanglu changes which are useful for Debian as well (or which just reduce the diff to Tanglu) back to their upstream packages.

Installer: Work on the Tanglu Live-Installer, although badly needed, has not yet been started (it's a task ready for taking by anyone who likes to do it!). However, some awesome progress has been made in making the Debian-Installer work for Tanglu, which allows us to perform minimal installations of the Tanglu base system and allows easier support of alternative Tanglu flavours. The work on d-i also uncovered a bug which appeared with the latest version of findutils, which was reported upstream before Debian could run into it. This awesome progress was possible thanks to the work of Philip Muškovac and Thomas Funk (in really hard debug sessions).
Tanglu Forums

We finally have the long-awaited Tanglu user forums ready! As discussed in the last meeting, a popular demand on IRC and our mailing lists was a forum or Stackexchange-like service for users to communicate, since many people can work better with that than with mailinglists. Therefore, the new English TangluUsers forum is now ready at TangluUsers.org. The forum software is in an alpha version though, so we might experience some bugs which haven't been uncovered in the testing period. We will watch how the software performs and then decide if we stick with it or maybe switch to another one. But so far, we are really happy with the Misago Forums, and our usage of it has already led to the inclusion of some patches against Misago. It also is actively maintained and has an active community.

Misc Things

KDE: We will ship with at least KDE Applications 4.13, maybe some 4.14 things as well (if we are lucky, since Tanglu will likely be in feature-freeze when this stuff is released). The other KDE parts will remain on their latest version from the 4.x series. For Tanglu 3, we might update KDE SC 4.x to KDE Frameworks 5 and use Plasma 5 though.

GNOME: Due to the lack of manpower on the GNOME flavor, GNOME will ship in the same version available in Debian Sid, maybe with some stuff pulled from Experimental where it makes sense. A GNOME flavor is planned to be available.

Common infrastructure: We currently run with systemd 208, but a switch to 210 is planned. Tanglu 2 also targets the X.org server in version 1.16. For more changes, stay tuned. The kernel release for Bartholomea is also not yet decided.

Artwork: Work on the default Tanglu 2 design has started as well; any artwork submissions are most welcome!

Tanglu joins the OIN: The Tanglu project is now a proud member (licensee) of the Open Invention Network (OIN), which builds a pool of defensive patents to protect the Linux ecosystem from companies who are trying to use patents against Linux. Although the Tanglu community does not fully share the generally positive stance the OIN has about software patents, the OIN effort is very useful and we agree with its goal. Therefore, Tanglu joined the OIN as licensee.
And that's the stuff for now! If you have further questions, just join us on #tanglu or #tanglu-devel on Freenode, or write to our newly created forum! You can, as always, also subscribe to our mailinglists to get in touch.

9 June 2014

Dirk Eddelbuettel: Rcpp 0.11.2

A new minor release 0.11.2 of Rcpp is now on the CRAN network for GNU R, and binaries for Debian have also been uploaded. The release smooths a few edges on the Rcpp side itself, as well as in the interaction between Rcpp and R, which since release 3.1.0 offers a few new features related to C++11. Kevin added a couple of neat extensions related to vectors, a new ListOf templated list class, as well as a new option to warn on implicit casts. We decided not to make this option the default as such casts may be too common in some packages. JJ took care of a few buglets related to the wonderful Rcpp Attributes. See the NEWS file section below for details, or the ChangeLog file in the package and on the Rcpp Changelog page. Note that the diffstat reported by CRANberries is very large as Kevin also committed a whitespace cleanup which touched almost all files. As before, we tested this release by building against all CRAN packages which depend upon Rcpp. In fact we did three such runs leading up to the release. Only one package was blacklisted (as I currently don't have CUDA set up), two had what may be internal errors or tests which were too restrictive, and sixteen suffered from missing packages or RGL devices --- but the remaining 202 packages all built and tested cleanly. Detailed results of those tests (and the scripts for them) are in the rcpp-logs repo on GitHub. There are a number of other fixes, upgrades and other extensions detailed in the NEWS file extract below, in the ChangeLog file in the package and on the Rcpp Changelog page.
Changes in Rcpp version 0.11.2 (2014-06-06)
  • Changes in Rcpp API:
    • Implicit conversions, e.g. between NumericVector and IntegerVector, will now give warnings if you use #define RCPP_WARN_ON_COERCE before including the Rcpp headers.
    • Templated List containers, ListOf<T>, have been introduced. When subsetting such containers, the return is assumed to be of type T, allowing code such as ListOf<NumericVector> x; NumericVector y = x[0] + x[1] + x[2].
    • In a number of instances, returned results are protected and/or cast more carefully.
  • Changes in Rcpp Attributes
    • Trailing line comments are now stripped by the attributes parser. This allows the parser to handle C++ source files containing comments inline with function arguments.
    • The USE_CXX1X environment variable is now defined by the cpp11 plugin when R >= 3.1. Two additional plugins have been added for use with C++0x (eg when using g++ 4.6.* as on Windows) as well as C++1y for compilers beginning to support the next revision of the standard; additional fallback is provided for Windows.
    • compileAttributes() now also considers Imports: which may suppress a warning when running Rcpp.package.skeleton().
Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.


31 May 2014

Dirk Eddelbuettel: RcppGSL 0.2.1 and 0.2.2

A few days ago, version 0.2.1 of RcppGSL---our interface package between R and the GNU GSL using our Rcpp package for seamless R and C++ integration---appeared on CRAN, making it the first release in some time. And it turned out that this version tickled an obscure and long-dormant bug under clang which was found on OS X Mavericks---which Kevin Ushey consequently squashed. So this is now reflected in version 0.2.2 which just arrived on CRAN. Besides the bugfix, a few things were modernized to reflect capabilities of Rcpp 0.11.0 and later. The releases also contain a few changes that had accumulated since the previous release in 2012, such as an additional example using B-splines, and use of updated vignette build options provided by R. The NEWS file entries follow below:
Changes in version 0.2.2 (2014-05-31)
  • A subtle bug (tickled only by clang on some OS versions) in vector and matrix view initialization was corrected by Kevin Ushey
Changes in version 0.2.1 (2014-05-26)
  • Added new example based on B-splines example in GSL manual illustrating simple GSL use via Rcpp attributes
  • Vignette compilation has been reverted to using highlight since version 0.4.2 or greater can be used as a vignette engine (with R 3.0.* or later).
  • Vignette compilation is now being done by R CMD build as R 3.0.0 supports different vignette engines, so the vignette build process has been simplified. A convenience helper script has also been added for command-line builds.
  • Unit tests now use sourceCpp() instead of cxxfunction() from the inline package
  • The DESCRIPTION file now uses Suggests: Rcpp (instead of Depends: Rcpp) to permit building of the vignette
  • The package now takes advantage of the simplified build process available with Rcpp (>= 0.11.0)
  • Similar updates to the build process were made for the example package included with RcppGSL
And courtesy of CRANberries, summaries of changes for release 0.2.1 and this week's release 0.2.2 are available. More information is on the RcppGSL page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.


5 May 2014

Dirk Eddelbuettel: RcppArmadillo 0.4.300.0

A new release 4.300 of Armadillo, the templated C++ library for linear algebra, was put out by Conrad a few days ago. As usual, this was rolled up in a new RcppArmadillo release 0.4.300.0 which is now on CRAN and in Debian. This release brings nice new upstream changes such as a faster find() as well as corresponding functions to find elements which are either finite or non-finite, based on a suggestion we made. This may need an RcppArmadillo-specific extension, as R extends the IEEE logic to support NA, NaN and Inf on floating point variables whereas standard C++ (and hence Armadillo) only supports the latter two. We hope to work on this shortly. Kevin also put in some code to make sure unsigned int vectors and matrices are converted correctly, and I updated the short configure script checking for complete LAPACK support to account for the R 3.0.3 or later case.
Changes in RcppArmadillo version 0.4.300.0 (2014-05-04)
  • Upgraded to Armadillo release Version 4.300 (Medieval Cornea Scraper)
    • faster find()
    • added find_finite() and find_nonfinite() for finding indices of finite and non-finite elements
    • expressions X=inv(A)*B*C and X=A.i()*B*C are automatically converted to X=solve(A,B*C)
  • Corrected conversion to unsigned int vectors and matrices
  • Configure script now checks for R version 3.0.3 or newer to enable complex divide-and-conquer SVD in case of R-supplied LAPACK
Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.


12 April 2014

Russ Allbery: Accumulated haul

Wow, it's been a while since I've done this. In part because I've not had much time for reading books (which doesn't prevent me from buying them).

Jared Bernstein & Dean Baker Getting Back to Full Employment (non-fiction)
James Coughtrey Six Seconds of Moonlight (sff)
Philip J. Davis & Reuben Hersh The Mathematical Experience (non-fiction)
Debra Dunbar A Demon Bound (sff)
Andy Duncan & Ellen Klages Wakulla Springs (sff)
Dave Eggers & Jordan Bass The Best of McSweeney's (mainstream)
Siri Hustvedt The Blazing World (mainstream)
Jacqueline Koyanagi Ascension (sff)
Ann Leckie Ancillary Justice (sff)
Adam Lee Dark Heart (sff)
Seanan McGuire One Salt Sea (sff)
Seanan McGuire Ashes of Honor (sff)
Seanan McGuire Chimes at Midnight (sff)
Seanan McGuire Midnight Blue-Light Special (sff)
Seanan McGuire Indexing (sff)
Naomi Mitchison Travel Light (sff)
Helaine Olen Pound Foolish (non-fiction)
Richard Powers Orfeo (mainstream)
Veronica Schanoes Burning Girls (sff)
Karl Schroeder Lockstep (sff)
Charles Stross The Bloodline Feud (sff)
Charles Stross The Traders' War (sff)
Charles Stross The Revolution Trade (sff)
Matthew Thomas We Are Not Ourselves (mainstream)
Kevin Underhill The Emergency Sasquatch Ordinance (non-fiction)
Jo Walton What Makes This Book So Great? (non-fiction)

So, yeah. A lot of stuff. I went ahead and bought nearly all of the novels Seanan McGuire had out that I'd not read yet after realizing that I'm going to eventually read all of them and there's no reason not to just own them. I also bought all of the Stross reissues of the Merchant Princes series, even though I had some of the books individually, since I think it will make it more likely I'll read the whole series this way. I have so much stuff that I want to read, but I've not really been in the mood for fiction. I'm trying to destress enough to get back in the mood, but in the meantime have mostly been reading non-fiction or really light fluff (as you'll see from my upcoming reviews). Of that long list, Ancillary Justice is getting a lot of press and looks interesting, and Lockstep is a new Schroeder novel. 'Nuff said. Kevin Underhill is the author of Lowering the Bar, which you should read if you haven't since it's hilarious. I'm obviously looking forward to that. The relatively obscure mainstream novels here are more Powell's Indiespensable books. I will probably cancel that subscription soon, at least for a while, since I'm just building up a backlog, but that's part of my general effort to read more mainstream fiction. (I was a bit disappointed since there were several months with only one book, but the current month finally came with two books again.) Now I just need to buckle down and read. And play video games. And do other things that are fun rather than spending all my time trying to destress from work and zoning in front of the TV.

12 February 2014

Dirk Eddelbuettel: RInside 0.2.11

A new release 0.2.11 of RInside is now on CRAN. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by the Rcpp R and C++ integration package. This release, the first in fourteen months, fixes one important initialization issue created by the recent Rcpp 0.11.0 release, adds a few other changes related to that release and improves a number of small points such as new or improved examples. The NEWS extract below has more details.
Changes in RInside version 0.2.11 (2014-02-11)
  • Updated for Rcpp 0.11.0:
    • Updated initialization by assigning global environment via pointer only after R itself has been initialized, with special thanks to Kevin Ushey for the fix
    • Updated DESCRIPTION with Imports: instead of Depends:
    • Added corresponding importFrom(Rcpp, evalCpp) to NAMESPACE
    • Noted in all inst/examples/*/Makefile that Rcpp no longer requires a library argument, but left code for backwards compatibility in case 0.11.0 is not yet installed.
  • Added --vanilla --slave to default arguments for R initialization
  • Added a few more explicit #include statements in the qt example which Qt 5.1 now appears to require with thanks to Spencer Behling for the patch
  • Added new MPI example with worker functions and RInside instance, kindly contributed by Nicholas Pezolano and Martin Morgan
CRANberries also provides a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.


3 February 2014

Dirk Eddelbuettel: Rcpp 0.11.0

A new major release 0.11.0 of Rcpp is now on the CRAN network for GNU R, and binaries for Debian have been uploaded as well. Before you read on, please note that this release will most likely require a reinstallation of all your packages using Rcpp as it now works without a user-facing shared library. The biggest change in this version is that it is now effectively headers-only. Before you wonder, there is still compiled code provided by Rcpp for use by packages. But this uses the function registration mechanism of GNU R instead, via instantiation at package startup. This does make package building easier, will remove the need to query GNU R for the Rcpp library file in src/Makevars when compiling, and should generally avoid build issues such as the dreaded "fails on paths with spaces" problem still annoying users of a certain OS. There are a number of other fixes, upgrades and other extensions detailed in the NEWS file extract below, in the ChangeLog file in the package and on the Rcpp Changelog page, as well as in a release announcement I'll post later.
Changes in Rcpp version 0.11.0 (2014-02-02)
  • Changes in Rcpp API:
    • Functions provided/used by Rcpp are now registered with R and instantiated by client packages, alleviating the need for explicit linking against libRcpp, which is therefore no longer created.
    • Updated the Rcpp.package.skeleton() function accordingly.
    • New class StretchyList for pair lists with fast addition of elements at the front and back. This abstracts the 3 functions NewList, GrowList and Insert used in various packages and in parsers in R.
    • The dnt, pnt and qnt sugar functions were incorrectly expanding to the no-degrees-of-freedom variant.
    • Unit tests for pnt were added.
    • The sugar table function did not handle NAs and NaNs properly for numeric vectors. Fixed and tests added.
    • The internal coercion mechanism mapping numerics to strings has been updated to better match R (specifically with Inf, -Inf, and NaN).
    • Applied two bug fixes to Vector sort() and RObject definition spotted and corrected by Kevin Ushey.
    • New checkUserInterrupt() function that provides a C++ friendly implementation of R_CheckUserInterrupt.
  • Changes in Rcpp attributes:
    • Embedded R code chunks in sourceCpp are now executed within the working directory of the C++ source file.
    • Embedded R code chunks in sourceCpp can now be disabled.
  • Changes in Rcpp documentation:
    • The Rcpp-FAQ and Rcpp-package vignettes have been updated and expanded.
    • Vignettes are now typeset with grey background for code boxes.
    • The bibtex reference file has been updated to reflect current package versions.
  • Changes in Rcpp unit tests:
    • The file tests/doRUnit.R was rewritten following the pattern deployed in RProtoBuf, which is due to Murray Stokely.
    • The function test() was rewritten; it provides an easy entry point to running unit tests of the installed package.
Thanks to CRANberries, you can also look at a diff to the previous release 0.10.6. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.


20 January 2014

Dirk Eddelbuettel: RProtoBuf 0.4.0: A whole lot of goodies and Windoze support

A new major release 0.4.0 of RProtoBuf is now on CRAN. RProtoBuf provides GNU R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects. With this release, we are welcoming Jeroen Ooms to the team. Jeroen had already worked on some RProtoBuf extensions in the context of his OpenCPU project; we have now integrated those Protocol Buffers functions. And Jeroen pushed all the right buttons to finally get RProtoBuf built on everybody's least-favourite operating system by providing a static library for use by win-builder and CRAN. Murray once again did a lot of work on internals. His use of the LLVM tool clang-format was particularly helpful to make our coding style a little more consistent. The complete NEWS file entry for this release follows:
Changes in RProtoBuf version 0.4.0 (2014-01-14)
  • Changes to support CRAN builds for MS Windows.
  • Added functions serialize_pb, unserialize_pb, and can_serialize_pb plus documentation from Jeroen Ooms' RProtoBufUtils package.
  • New dir inst/python with some Python examples.
  • Added Jeroen Ooms as author.
  • Vignettes have been converted to the R 3.0.0 or later use of external vignette builders and no longer need a Makefile.
  • Added missing methods to dollar completion list for Message, Descriptor, EnumValueDescriptor, and FileDescriptor classes.
  • Add missing export for .DollarNames EnumValueDescriptor to allow completion on that class.
  • Add more than 15 additional pages to the main Intro vignette documenting better all of the S4 classes implemented by RProtoBuf, updating the type mapping tables to take into account 64-bit support, and documenting advanced features such as Extensions.
  • Added better error checking in EnumDescriptors to catch the case when wrong types are provided.
  • Updated the FileDescriptor name() method to accept a boolean for full paths just like the generic name() method.
  • Correct a bug that incorrectly dispatched as.character() when as.list() was called on Descriptor objects.
  • Update FileDescriptor $ dispatch to work properly for the names of fields defined in the FileDescriptor, instead of just returning NULL even for types returned by $ completion.
  • Added a reservation for extension fields in the example tutorial.Person schema.
  • Support setting int32 fields with character representations and raise an R-level stop() error if the provided string can not be parsed as a 32-bit integer, rather than crashing the R instance.
  • Update the project TODO file.
  • Add better documentation and tests for all of the above.
  • Corrected the handling of uint32 and fixed32 types in protocol buffers to ensure that they work with numbers as large as 2^32 - 1, which is larger than an integer can hold in R since R does not have an unsigned integer class. These values are stored as doubles internally now to avoid losing precision.
  • Added unit tests to verify behavior of RProtoBuf with extreme values for uint32 types.
  • Removed old exception handling code and instead rely on the more modern Rcpp::stop method maintained in Rcpp.
  • Add better error messages when setting a repeated field of messages to inform the user which element index was of the wrong type and what the expected type was.
  • Add an optional 'partial' argument to readASCII allowing uninitialized message fragments to be read in.
  • (internal) Added const qualifiers in more places throughout the C++ code for type safety.
  • (internal) Standardize coding conventions of the C++ files and run them through clang-format for consistency. A STYLE file has been submitted to R-Forge with details about the coding standards and how they are enforced with Emacs and clang-format.
  • Applied changes suggested by Kevin Ushey to the S4 class handling to support both the currently released Rcpp as well as the currently pending next version.
CRANberries also provides a diff to the previous release 0.3.2. More information is at the RProtoBuf page which has a draft package vignette, a 'quick' overview vignette and a unit test summary vignette. Questions, comments etc should go to the rprotobuf mailing list off the RProtoBuf page at R-Forge.


18 January 2014

James Bromberger: Linux.conf.au 2014: LCA TV

The radio silence here on my blog has been not from lack of activity, but the inverse. Linux.conf.au chewed up the few remaining spare cycles I have had recently (after family and work), but not from organising the conference (been there, got the T-Shirt and the bag). So, let's do a run through of what has happened. LCA2014 Perth has come and gone in pretty smooth fashion. A remarkable effort from the likes of the Perth crew of Luke, Paul, Euan, Leon, Jason, Michael, and a slew of volunteers who stepped up, not to mention our interstate friends of Steve and Erin, Matthew, James I, Tim the Streaming guy and others, and our pro organisers at Manhattan Events. It was a reasonably smooth ride: the UWA campus was beautiful, the lecture theatres were workable, and the Octagon Theatre was at its best when filled with just shy of 500 like-minded people and an accomplished person gracing the stage. What was impressive (to me, at least) was the effort of the AV team (which I was on the extreme edges of); videos of keynotes hit the Linux Australia mirror within hours of the event. Recording and live streaming of all keynotes and sessions happened almost flawlessly. Leon had built a reasonably robust video capture management system (eventstreamer on github) to ensure that people fresh to DVswitch had nothing break so bad it didn't automatically fix itself, and all of this was monitored from the Operations Room (called the TAVNOC, which would have been the AV NOC, but somehow a loose reference to the UWA Tavern, the Tav, crept in there). Some 167 videos were made and uploaded; most of this was also mirrored on campus before the end of the conference so attendees could load up their laptops with plenty of content for the return trip home. Euan's quick Blender work meant there was a nice intro and outro graphic, and Leon's scripting ensured that Zookeepr, the LCA conference management software, was the source of truth in getting all videos processed and tagged correctly.
I was scheduled (and did give) a presentation at LCA 2014 about Debian on Amazon Web Services (on Thursday), and attended as many of the sessions as possible, but my good friend Michael Davies (LCA 2004 chair, and chair of the LCA Papers Committee for a good many years) had another role for this year. We wanted to capture some of the hallway track of Linux.conf.au that is missed in all the videos of presentations. And thus was born LCA TV. LCA TV consisted of the video equipment for an additional stream mixer host, cameras, cables and switches, hooking into the same streaming framework as the rest of the sessions. We took over a corner of the registration room (UWA Undercroft), brought in a few stage lights, a couch, coffee table, seat, some extra mics, and aimed to fill the session gaps with informal chats with some of the people at Linux.conf.au: speakers, attendees, volunteers alike. And come they did. One or two interviews didn't succeed (this was an experiment), but in the end, we've got over 20 interviews with some interesting people. These streamed out live to the people watching LCA from afar, those unable to make it to Perth in early January; but they were recorded too, and we can start to watch them (see below). I was also lucky enough to mix the video for the three keynotes as well as the opening and closing, with very capable crew around the Octagon Theatre. As the curtain came down, and the 2014 crew took to the stage to be congratulated by the attendees, I couldn't help but feel a little bit proud and a touch nostalgic: memories from 11 years earlier when LCA 2003 came to a close in the very same venue. So, before we head into the viewing season for LCA TV, let me thank all the volunteers who organised, the AV volunteers, the Registration volunteers, the UWA team who helped with the Octagon, networking, and the awesome CB radios hooked up to the UWA repeater that worked all the way to the airport. Thanks to the Speakers who submitted proposals.
The Speakers who were accepted, made the journey and took to the stage. The people who attended. The sponsors who help make this happen. All of the above helps share the knowledge, and ultimately, move the community forward. But my thanks to Luke and Paul for agreeing to stand there in the middle of all this madness and hive of semi-structured activity that just worked. Please remember this was experimental; the noise was the buzz of the conference going on around us. There was pretty much only one person on the AV kit: my thanks to Andrew Cooks, who I'll dub our sound editor, vision director, floor manager, and anything else. So who did we interview? One or two talks did not work, so apologies to those that are missing. Here's the playlist to start you off! Enjoy.

31 December 2013

Paul Tagliamonte: Hy 0.9.12 released

Good morning all my hungover friends. New Hy release - sounds like the perfect thing to do while you're waiting for your headaches to go away. Here's a short list of the changes (from NEWS) - enjoy!
Changes from Hy 0.9.11
   tl;dr:
    0.9.12 comes with some massive changes,
    We finally took the time to implement gensym, as well as a few
    other bits that help macro writing. Check the changelog for
    what exactly was added. 
    
    The biggest feature, Reader Macros, landed later
    in the cycle, but was big enough to warrant a release on its
    own. A huge thanks goes to Foxboron for implementing them
    and a massive hug goes out to olasd for providing ongoing
    reviews during the development.
    
    Welcome to the new Hy contributors, Henrique Carvalho Alves,
    Kevin Zita and Kenan Bölükbaşı. Thanks for your work so far,
    folks!
    
    Hope y'all enjoy the finest that 2013 has to offer,
      - Hy Society
    * Special thanks goes to Willyfrog, Foxboron and theanalyst for writing
      0.9.12's NEWS. Thanks, y'all! (PT)
    [ Language Changes ]
    * Translate foo? -> is_foo, for better Python interop. (PT)
    * Reader Macros!
    * Operators + and * now can work without arguments
    * Define kwapply as a macro
    * Added apply as a function
    * Instant symbol generation with gensym
    * Allow macros to return None
    * Add a method for casting into byte string or unicode depending on python version
    * flatten function added to language
    * Added type coercing to the right integer for the platform
    [ Misc. Fixes ]
    * Added information about core team members
    * Documentation fixed and extended
    * Add astor to install_requires to fix hy --spy failing on hy 0.9.11.
    * Convert stdout and stderr to UTF-8 properly in the run_cmd helper.
    * Update requirements.txt and setup.py to use rply upstream.
    * tryhy link added in documentation and README
    * Command line options documented
    * Adding support for coverage tests at coveralls.io
    * Added info about tox, so people can use it prior to a PR
    * Added the start of hacking rules
    * Halting Problem removed from example as it was nonfree
    * Fixed PyPI is now behind a CDN. The --use-mirrors option is deprecated.
    * Badges for pypi version and downloads.
    [ Syntax Fixes ]
    * get allows multiple arguments
    [ Bug Fixes ]
    * OSX: Fixes for the readline REPL problem which caused the HyREPL to not allow 'b'
    * Fix REPL completions on OSX
    * Make HyObject.replace more resilient to prevent compiler breakage.
    [ Contrib changes ]
    * Anaphoric macros added to contrib
    * Modified eg/twisted to follow the newer hy syntax
    * Added (experimental) profile module

21 December 2013

Daniel Kahn Gillmor: Kevin M. Igoe should step down from CFRG Co-chair

I've said recently that pervasive surveillance is wrong. I don't think anyone from the NSA should have a leadership position in the development or deployment of Internet communications, because their interests are at odds with the interest of the rest of the Internet. But someone at the NSA is in exactly such a position. They ought to step down. Here's the background: The Internet Research Task Force (IRTF) is a body tasked with research into underlying concepts, themes, and technologies related to the Internet as a whole. They act as a research organization that cooperates and complements the engineering and standards-setting activities of the Internet Engineering Task Force (IETF). The IRTF is divided into issue-specific research groups, each of which has a Chair or Co-Chairs who have "wide discretion in the conduct of Research Group business", and are tasked with organizing the research and discussion, ensuring that the group makes progress on the relevant issues, and communicating the general sense of the results back to the rest of the IRTF and the IETF. One of the IRTF's research groups specializes in cryptography: the Crypto Forum Research Group (CFRG). There are two current chairs of the CFRG: David McGrew <mcgrew@cisco.com> and Kevin M. Igoe <kmigoe@nsa.gov>. As you can see from his e-mail address, Kevin M. Igoe is affiliated with the National Security Agency (NSA). The NSA itself actively tries to weaken cryptography on the Internet so that they can improve their surveillance, and one of the ways they try to do so is to "influence policies, standards, and specifications". On the CFRG list yesterday, Trevor Perrin requested the removal of Kevin M. Igoe from his position as Co-chair of the CFRG. Trevor's specific arguments rest heavily on the technical merits of a proposed cryptographic mechanism called Dragonfly key exchange, but I think the focus on Dragonfly itself is the least of the concerns for the IRTF. 
I've seconded Trevor's proposal, and asked Kevin directly to step down and to provide us with information about any attempts by the NSA to interfere with or subvert recommendations coming from these standards bodies. Below is my letter in full:
From: Daniel Kahn Gillmor <dkg@fifthhorseman.net>
To: cfrg@ietf.org, Kevin M. Igoe <kmigoe@nsa.gov>
Date: Sat, 21 Dec 2013 16:29:13 -0500
Subject: Re: [Cfrg] Requesting removal of CFRG co-chair
On 12/20/2013 11:01 AM, Trevor Perrin wrote:
> I'd like to request the removal of Kevin Igoe from CFRG co-chair.
Regardless of the conclusions that anyone comes to about Dragonfly
itself, I agree with Trevor that Kevin M. Igoe, as an employee of the
NSA, should not remain in the role of CFRG co-chair.
While the NSA clearly has a wealth of cryptographic knowledge and
experience that would be useful for the CFRG, the NSA is apparently
engaged in a series of attempts to weaken cryptographic standards and
tools in ways that would facilitate pervasive surveillance of
communication on the Internet.
The IETF's public position in favor of privacy and security rightly
identifies pervasive surveillance on the Internet as a serious problem:
https://www.ietf.org/media/2013-11-07-internet-privacy-and-security.html
The documents Trevor points to (and others from similar stories)
indicate that the NSA is an organization at odds with the goals of the IETF.
While I want the IETF to continue welcoming technical insight and
discussion from everyone, I do not think it is appropriate for anyone
from the NSA to be in a position of coordination or leadership.
----
Kevin, the responsible action for anyone in your position is to
acknowledge the conflict of interest, and step down promptly from the
position of Co-Chair of the CFRG.
If you happen to also subscribe to the broad consensus described in the
IETF's recent announcement -- that is, if you care about privacy and
security on the Internet -- then you should also reveal any NSA activity
you know about that attempts to subvert or weaken the cryptographic
underpinnings of IETF protocols.
Regards,
	--dkg
I'm aware that an abdication by Kevin (or his removal by the IETF chair) would probably not end the NSA's attempts to subvert standards bodies or weaken encryption. They could continue to do so by subterfuge, for example, or by private influence on other public members. We may not be able to stop them from doing this in secret, and the knowledge that they may do so seems likely to cast a pall of suspicion over any IETF and IRTF proceedings in the future. This social damage is serious and troubling, and it marks yet another cost to the NSA's reckless institutional disregard for civil liberties and free communication. But even if we cannot rule out private NSA influence over standards bodies and discussion, we can certainly explicitly reject any public influence over these critical communications standards by members of an institution so at odds with the core principles of a free society. Kevin M. Igoe, please step down from the CFRG Co-chair position. And to anyone (including Kevin) who knows about specific attempts by the NSA to undermine the communications standards we all rely on: please blow the whistle on this kind of activity. Alert a friend, a colleague, or a journalist. Pervasive surveillance is an attack on all of us, and those who resist it are heroes.

29 October 2013

Soeren Sonnenburg: Shogun Toolbox Version 3.0 released!

Dear all, we are proud to announce the 3.0 release of the Shogun Machine-Learning Toolbox. This release features the incredible projects of our 8 hard-working Google Summer of Code students. In addition, you get other cool new features as well as lots of internal improvements, bugfixes, and documentation improvements. To speak in numbers, we got more than 2000 commits changing almost 400000 lines in more than 7000 files and increased the number of unit tests from 50 to 600. This is the largest release that Shogun ever had! Please visit http://shogun-toolbox.org/ to obtain Shogun.

News

Here is a brief description of what is new, starting with the GSoC projects, which deserve most fame:

Screenshots

Everyone likes screenshots. Well, we have got something better! All of the above projects (and more) are now documented in the form of IPython notebooks, combining machine learning fundamentals, code, and plots. Those are a great-looking way that we chose to document our framework from now on. Have a look at them and feel free to submit your use case as a notebook! FGM.html GMM.html HashedDocDotFeatures.html LMNN.html SupportVectorMachines.html Tapkee.html bss_audio.html bss_image.html ecg_sep.html gaussian_processes.html logdet.html mmd_two_sample_testing.html The web-demo framework has been integrated into our website, go check them out.

Other changes

We finally moved the Shogun build process to CMake. Through GSoC, we added general clone and equals methods to all Shogun objects, and added automagic unit-testing for serialisation and clone/equals for all classes. Other new features include multiclass LDA, and probability outputs for multiclass SVMs. For the full list, see the NEWS.

Workshop Videos and slides

In case you missed the first Shogun workshop that we organised in Berlin last July, all of the talks have been put online.
Shogun in the Cloud

As setting up the right environment for shogun and installing it was always one of the biggest problems for the users (hence the switch to CMake), we have created a sandbox where you can try out shogun on your own without installing shogun on your system! Basically it's a web-service which gives you access to your own ipython notebook server with all the shogun notebooks. Of course you are more than welcome to create and share your own notebooks using this service! *NOTE*: This is a courtesy service created by Shogun Toolbox developers, hence if you like it please consider some form of donation to the project so that we can keep this service running for you. Try shogun in the cloud.

Thanks

The release has been made possible by the hard work of all of our GSoC students, see list above. Thanks also to Thoralf Klein and Björn Esser for the load of great contributions. Last but not least, thanks to all the people who use Shogun and provide feedback. Sören Sonnenburg on behalf of the Shogun team (+ Viktor Gal, Sergey Lisitsyn, Heiko Strathmann and Fernando Iglesias)

5 September 2013

Daniel Pocock: Will Baby Boomers strangle Australia's Internet?

When Labor's Communications Minister Stephen Conroy won the title Internet villain of the year, most educated people realised it wasn't something to celebrate. So it might be surprising for some that the title could potentially be returning to Australia so soon... Australia's main opposition party, the deceptively named Liberals (who actually lurk at the Marine Le Pen grade of conservatism) have just announced plans to resurrect the plan by implanting filtering technology in all modems, routers and smartphones. Whether they do or don't, there are serious concerns that they have described Australia's still-under-construction national broadband network as a wasteful project (which is potentially true) but failed to provide any credible alternative: who really thinks the conservatives' proposed 25Mbps will be a serious technology when they finish deploying it in 2019? It is interesting to look at the bigger picture to see where this stupidity comes from. A political poll released early this week gives us half the story.

Big bad baby boomers

It is obvious that older voters prefer the conservative "coalition" parties, 57% of over-55s in particular. It is a huge jump from the 44% of voters in the age group below them. What this table doesn't tell us is that Australia's population is top-heavy with Baby Boomers. In other words, there are actually quite a lot of those over-55-year-old voters. In fact, it is estimated that close to 1 in 3 Australians will be in this category within the next 10 years. People who want the world to be the way it was in the 60s: many have already paid off a mortgage on a home in the suburbs with no access to things like public transport. They grew up under the white Australia policy and are frustrated at the presence of educated, hard-working foreigners from countries like India and China.
Retired, no longer working, expecting the state to provide roads to their poorly located McMansions and free health care but little concern for broadband or anything that smells new. As they are no longer working, many have no commercial demand for broadband but will happily vote for a policy to censor it even if they don't use it themselves, simply because they want to see a Government that is blocking change. Sadly, they are shooting themselves in the foot: younger generations of Australians need to work and pay tax if there is to be any hope of keeping the dream alive for older Australians. However, with the conservatives cutting education to fund an over-dramatised response to immigration that costs billions of dollars every year, cutting corners and dumbing-down essential technology like the Internet, it seems that working-age Australians will struggle to compete in the global marketplace.

Keith Packard: Airfest-altimeter-testing

Altimeter Testing at Airfest

Bdale and I, along with AJ Towns and Mike Beattie, spent last weekend in Argonia, Kansas, flying rockets with our Kloudbusters friends at Airfest 19. We had a great time! AJ and Mike both arrived a week early at Bdale's to build L3 project airframes, and both flew successful cert flights at Airfest! Airfest was an opportunity for us to test fly prototypes of new flight electronics Bdale and I have spent the last few weeks developing, and I thought I'd take a few minutes today to write some notes about what we built and flew.

TeleMega

We've been working on TeleMega for quite a while. It's a huge step up in complexity from our original TeleMetrum, as it has a raft of external sensors and six pyro circuits. Bdale flew TeleMega in his new fiberglass 4 airframe on a Loki 75mm blue M demo motor. GPS tracking was excellent; you can see here that GPS altitude tracked the barometric sensor timing exactly: GPS lost lock when the motor lit, but about 3 seconds after motor burnout, it re-acquired the satellite signals and was reporting usable altitude data right away. The GPS-reported altitude was higher than the baro sensor's, but that can be explained by our approximation of an atmospheric model used to convert pressure into altitude. The rest of the flight was also nominal; TeleMega deployed drogue and main chutes just fine.

TeleMetrum

We've redesigned TeleMetrum. The new version uses better sensors (MS5607 baro sensor, MMA6555 accelerometer) and a higher-power radio (CC1120, 40mW). The board is the same size, all the connectors are in the same places so it's a drop-in replacement, and it's still got two pyro channels and USB for configuration, data download and battery charging.
I loaded up my Candy-Cane airframe with a small 5-grain 38mm CTI Classic. The flight computer worked perfectly, but GPS reception was not as good as we'd like to see. Given how well TeleMega was receiving GPS signals, I'm hopeful that we'll be able to tweak TeleMetrum to improve performance.

TeleMini

We've also redesigned TeleMini. It's still a two-channel flight computer with logging and telemetry, but we've replaced the baro sensor with the MS5607, added on-board flash for increased logging space, and added on-board screw terminals for an external battery and power switch. You can still use one of our 3.7V batteries, but you can also use another battery providing from 3.7 to 15V. I was hoping to finish up the firmware and fly it, but I ran out of time before the launch. The good news is that all of the components of the board have been tested and work correctly, and the firmware is "feature complete", meaning we've gotten all of the features coded; it's just not quite working yet.

EasyMini

EasyMini is a new product for us. It's essentially the same as a TeleMini, but without a radio: two channels, baro-only, with logging. Like TeleMini, it includes an on-board USB connector and can use either one of our 3.7V batteries or an external battery from 3.7V to 15V. EasyMini and TeleMini are the same size and have holes in the same places, so you can swap between them easily. I flew EasyMini in my Koala airframe with a 29mm 3-grain CTI Blue Streak motor. EasyMini successfully deployed the main chute and logged flight data. We also sent a couple of boards home with Kevin Trojanowski and Greg Rothman for them to play with.

TeleGPS

TeleGPS is a GPS tracker, incorporating a u-blox Max receiver and a 70cm transmitter. It can send position information via APRS or our usual digital telemetry formats. I was also hoping to have the TeleGPS firmware working, and I spent a couple of nights in the motel coding, but didn't manage to finish up. So, no data from this board either.
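For context on the APRS option, an uncompressed APRS position report is just a fixed-width text string of latitude and longitude in degrees and decimal minutes. A minimal sketch of that encoding (generic, not TeleGPS firmware; the symbol-table and symbol-code characters are illustrative choices):

```python
def aprs_position(lat, lon, table='/', symbol='O'):
    """Build an uncompressed APRS position report from decimal degrees.

    `table` and `symbol` select the map icon; '/' plus 'O' is the
    primary-table balloon symbol, a common choice for airborne trackers.
    """
    ns = 'N' if lat >= 0 else 'S'
    ew = 'E' if lon >= 0 else 'W'
    lat, lon = abs(lat), abs(lon)
    # Latitude is DDMM.MM, longitude is DDDMM.MM (degrees, then decimal minutes)
    lat_str = f"{int(lat):02d}{(lat - int(lat)) * 60:05.2f}{ns}"
    lon_str = f"{int(lon):03d}{(lon - int(lon)) * 60:05.2f}{ew}"
    return f"!{lat_str}{table}{lon_str}{symbol}"
```

A real tracker wraps this payload in an AX.25 frame with source and destination callsigns before transmitting it on the air.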
Production Plans

Given the success of the latest TeleMega prototype, we're hoping to get it into production first. We'll do some more RF testing on the bench with the boards to make sure it meets our standards before sending it out for the first production run. The goal is to have TeleMega ready to sell by the end of October. TeleMetrum clearly needs work on the layout to improve GPS RF performance. With the testing equipment that Bdale is in the midst of re-acquiring, it should be possible to finish this up fairly soon. However, the flight firmware looks great, so we're hoping to get these done in time to sell by the end of November. TeleMini is looking great from a hardware perspective, but the firmware needs work. Once the firmware is running, we'll need to make enough test flights to shake out any remaining issues before moving forward with it. EasyMini is also looking finished; I've got a stack of prototypes and will be getting people to fly them at my local launch in another couple of weeks. The plan here is to build a small batch by hand and get them into the store once we're finished testing, using those to gauge interest before we pay for a larger production run.

23 July 2013

Daniel Pocock: If Assange is a rapist, what type of monsters are running Australia?

It's almost one year since Julian Assange was granted asylum by the Republic of Ecuador. Newspapers often incorrectly report that he sought political asylum to avoid those disturbing rape allegations (they are not charges). The real basis of his claim to asylum, and the foundation on which asylum law is built, is that Mr Assange is not safe from political persecution in his own country.

Let's not talk about rape

It's often difficult for men to talk about this subject without putting their foot in their mouth. In this case, it may be easier to understand by trying to put it in context.

Safer with Mr Assange?
Just last week, our Government announced plans to expand their program of dumping poor coloured people into random countries around the Asia-Pacific region. On Monday they broke every rule in the UN's book of human rights by using the Internet to distribute gut-wrenching YouTube videos of a woman suffering at the hands of our immigration officers. They hope this degradation will scare away other migrants who do not meet the right economic criteria. It makes me wonder: if Mr Assange is genuinely guilty of rape despite his alleged victim boasting about her experience on Twitter, what has happened to this other woman in the care of Kevin Rudd's scorched-earth immigration policy? Maybe because she's not a Swedish blonde, nobody cares. More racist videos from the immigration department fail to correctly inform migrants of their right to seek asylum under international law; is it possible that this offensive and misleading propaganda violates YouTube's terms of service?

Crimes so bad that whistle-blowers are inevitable

Former employees of the Manus Island concentration camp have stepped forward and revealed what the Government naturally doesn't want us to know. According to one former security manager, vulnerable migrants had suffered virtually every form of physical abuse, including rapes, and under orders from management in Australia, the victims were simply locked up with their abusers and no formal complaints were documented. All of a sudden, those pictures of the woman crying in the degrading video made by immigration officials take on a new meaning: she is next in line for the same destination, PNG. I rather suspect she would enjoy greater dignity and safety confined to Ecuador's London embassy with the so-called rapist that everybody has been distracted with. Can anybody imagine the widely discussed expression on her face improving after 20 months of enforced confinement with rapists under the protection of Australia's refugee program?
